The SINDy algorithm has been successfully used to identify the governing equations of dynamical systems from time series data. In this paper, we argue that this makes SINDy a potentially useful tool for causal discovery, and that existing tools for causal discovery can, in turn, dramatically improve the performance of SINDy as a tool for robust sparse modeling and system identification. We then demonstrate empirically that augmenting the SINDy algorithm with tools from causal discovery can provide engineers with a method for learning causally robust governing equations.
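For readers unfamiliar with SINDy itself, the following is a minimal sketch of the underlying sparse-regression step (sequentially thresholded least squares over a polynomial candidate library), applied to simulated Lorenz data; the library, threshold, and simulation settings are illustrative assumptions, and the causal-discovery augmentation proposed in the paper is not shown.

```python
# Minimal SINDy-style sketch: sequentially thresholded least squares (STLSQ)
# on simulated Lorenz data. Library, threshold, and noise level are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.002
t = np.arange(0, 10, dt)
sol = solve_ivp(lorenz, (t[0], t[-1]), [-8.0, 8.0, 27.0], t_eval=t)
X = sol.y.T                                  # state trajectory, shape (T, 3)
dX = np.gradient(X, dt, axis=0)              # finite-difference derivative estimates

def poly_library(X):
    """Candidate functions: constant, linear, and quadratic monomials."""
    x, y, z = X.T
    cols = [np.ones_like(x), x, y, z, x * x, x * y, x * z, y * y, y * z, z * z]
    return np.column_stack(cols)

Theta = poly_library(X)

def stlsq(Theta, dX, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: prune small coefficients, refit."""
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dX.shape[1]):         # refit each state equation on its support
            big = ~small[:, k]
            Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
    return Xi

print(np.round(stlsq(Theta, dX), 2))         # sparse coefficients ~ Lorenz equations
```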
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, 32% of participants stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Deep neural networks have been successfully adopted in diverse domains, including pathology classification based on medical images. However, large-scale, high-quality data for training powerful neural networks are rare in the medical domain, as labeling must be done by qualified experts. Researchers have recently tackled this problem with some success by taking advantage of models pre-trained on large-scale general-domain data. Specifically, researchers took contrastive image-text encoders (e.g., CLIP) and fine-tuned them with chest X-ray images and paired reports to perform zero-shot pathology classification, completely removing the need for pathology-annotated images to train a classification model. Existing studies, however, fine-tuned the pre-trained model with the same contrastive learning objective and failed to exploit the multi-labeled nature of medical image-report pairs. In this paper, we propose a new fine-tuning strategy based on sentence sampling and positive-pair loss relaxation for improving downstream zero-shot pathology classification performance, which can be applied to any pre-trained contrastive image-text encoder. Our method consistently and dramatically improved zero-shot pathology classification performance on four different chest X-ray datasets and three different pre-trained models (5.77% average AUROC increase). In particular, fine-tuning CLIP with our method performed comparably to, or marginally better than, board-certified radiologists (0.619 vs. 0.625 in F1 score and 0.530 vs. 0.544 in MCC) in zero-shot classification of five prominent diseases from the CheXpert dataset.
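As context, the snippet below sketches only the generic zero-shot scoring step with a stock CLIP checkpoint from the `transformers` library; the prompt pair, the image path, and the model ID are illustrative assumptions, and the paper's sentence-sampling and positive-pair loss relaxation fine-tuning is not reproduced here.

```python
# Sketch of zero-shot pathology scoring with a contrastive image-text encoder.
# Uses stock CLIP weights; the paper's fine-tuning strategy is not reproduced.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical prompt pair for one pathology; real prompts come from report sentences.
prompts = ["a chest x-ray with pleural effusion", "a chest x-ray with no pleural effusion"]
image = Image.open("chest_xray.png").convert("RGB")  # placeholder path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
probs = out.logits_per_image.softmax(dim=-1)          # [p(positive), p(negative)]
print("P(pathology present) =", probs[0, 0].item())
```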
Point-of-Care Ultrasound (POCUS) refers to clinician-performed and interpreted ultrasonography at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that aid medical professionals in diagnosing whether or not a patient has pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty of acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to constrain the hypothesis space, thus reducing the cost of data collection. We present results on two lung ultrasound datasets and demonstrate that our model achieves performance on par with SMEs in pneumothorax identification. We also developed an iOS application that runs our full system in less than 4 seconds on an iPad Pro and less than 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
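A rough sketch of the "sparse features, then small-data classifier" stage is given below, using scikit-learn dictionary learning on flattened clips as a stand-in for the paper's 3D sparse coding model; the synthetic arrays, dictionary size, and classifier are assumptions, and the YOLOv4 region-extraction step and the SME-constrained hypothesis space are not shown.

```python
# Rough sketch of the "sparse features -> small-data classifier" stage.
# Synthetic arrays stand in for YOLOv4-cropped ultrasound clips; the paper's
# actual 3D sparse coding model and SME-constrained hypothesis space are not shown.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_clips, T, H, W = 47, 8, 16, 16                 # ~15 positive + 32 negative examples
clips = rng.normal(size=(n_clips, T, H, W))
labels = np.array([1] * 15 + [0] * 32)

# Learn a dictionary over flattened spatiotemporal clips (stand-in for 3D sparse coding).
patches = clips.reshape(n_clips, -1)
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(patches)              # sparse codes, shape (n_clips, 32)

# Small-data classifier on the sparse codes.
clf = LogisticRegression(max_iter=1000).fit(codes, labels)
print("train accuracy:", clf.score(codes, labels))
```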
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
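As a usage illustration, the sketch below loads one of the smaller openly released BLOOM checkpoints through the `transformers` library and runs a few-shot prompt; the choice of the 560M-parameter variant and the prompt itself are assumptions made only to keep the example small.

```python
# Few-shot prompting sketch with an openly released BLOOM checkpoint.
# bloom-560m is used here only to keep the example small; the prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = (
    "Translate English to French.\n"
    "English: cheese\nFrench: fromage\n"
    "English: good morning\nFrench:"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```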
Semantically meaningful sentence embeddings are important for many tasks in natural language processing. To obtain such embeddings, recent studies have explored the idea of utilizing synthetically generated data from pretrained language models (PLMs) as a training corpus. However, PLMs often generate sentences that differ substantially from those written by humans. We hypothesize that treating all of these synthetic examples equally when training a deep neural network can have an adverse effect on learning semantically meaningful embeddings. To analyze this, we first train a classifier that identifies machine-written sentences and observe that the linguistic features of machine-written sentences differ markedly from those of human-written sentences. Based on this, we propose a novel approach that first trains the classifier to measure the importance of each sentence; the distilled information from the classifier is then used to train a reliable sentence embedding model. Through extensive evaluation on four real-world datasets, we demonstrate that our model trained on synthetic data generalizes well and outperforms existing baselines. Our implementation is publicly available at https://github.com/ddehun/coling2022_reweighting_sts.
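A minimal sketch of the reweighting idea follows: a discriminator's per-pair "human-likeness" probability is turned into a weight on the embedding loss; the toy encoder, the cosine-similarity regression objective, and the weighting scheme are illustrative assumptions, not the paper's exact distillation recipe.

```python
# Minimal sketch of reweighting synthetic pairs by a "human-likeness" score.
# The encoder, the weighting function, and the cosine-similarity regression loss
# are illustrative stand-ins for the paper's distillation-based recipe.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy sentence encoder over pre-tokenized bag-of-words vectors."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.proj = nn.Linear(vocab, dim)
    def forward(self, x):
        return nn.functional.normalize(self.proj(x), dim=-1)

encoder = TinyEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Synthetic batch: sentence pairs, target similarity labels, and discriminator
# probabilities that each pair looks human-written (higher = more trusted).
a, b = torch.rand(32, 1000), torch.rand(32, 1000)
target_sim = torch.rand(32)
p_human = torch.rand(32)
weights = p_human / p_human.sum()                 # normalize weights over the batch

opt.zero_grad()
sim = (encoder(a) * encoder(b)).sum(dim=-1)       # cosine similarity of each pair
loss = (weights * (sim - target_sim) ** 2).sum()  # weighted regression loss
loss.backward()
opt.step()
```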
Despite the abundance of electronic health records (EHRs), their heterogeneity restricts the utilization of medical data in building predictive models. To address this challenge, we propose the Universal Healthcare Predictive Framework (UniHPF), which requires no medical domain knowledge and only minimal preprocessing for multiple prediction tasks. Experimental results show that UniHPF is capable of building large-scale EHR models that can process any form of medical data from distinct EHR systems. Our framework significantly outperforms baseline models in multi-source learning tasks, including transfer and pooled learning, while also showing comparable results when trained on a single medical dataset. To empirically demonstrate the efficacy of our work, we conducted extensive experiments using various datasets, model structures, and tasks. We believe that our findings can provide helpful insights for further research on multi-source learning with EHRs.
Federated learning (FL) is an active area of research. One of the most suitable domains for adopting FL is healthcare, where patient privacy must be respected. Previous research, however, has not fully considered who is most likely to use FL in the medical domain. It is not the hospitals that are eager to adopt FL, but the service providers who want to develop machine learning models with real patient records. Moreover, service providers want to maximize model performance at the lowest possible cost. In this work, we present an empirical benchmark of FL methods, considering both performance and monetary cost, on three real-world datasets: electronic health records, skin cancer images, and electrocardiogram data. We also propose federated learning with proximal regularization except local normalization (FedPxN), which, using a simple combination of FedProx and FedBN, outperforms all other FL algorithms while consuming only slightly more than the most efficient method.
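A minimal sketch of the FedProx-plus-FedBN combination is shown below: a proximal term is added to each client's local objective, and normalization-layer parameters are excluded from server averaging; the toy model, the value of mu, and the name-based layer filter are assumptions, not the authors' implementation.

```python
# Sketch of the FedProx + FedBN combination described for FedPxN:
# a proximal term in the local objective, and normalization-layer parameters
# excluded from server averaging. Model, mu, and data are illustrative.
import copy
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 16)
        self.bn = nn.BatchNorm1d(16)
        self.fc2 = nn.Linear(16, 2)
    def forward(self, x):
        return self.fc2(torch.relu(self.bn(self.fc1(x))))

def local_update(model, global_model, loader, mu=0.01, lr=0.01):
    """One client epoch with a FedProx-style proximal term toward the global weights."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    global_params = [p.detach().clone() for p in global_model.parameters()]
    for x, y in loader:
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        prox = sum(((p - g) ** 2).sum() for p, g in zip(model.parameters(), global_params))
        (loss + 0.5 * mu * prox).backward()
        opt.step()
    return model

def aggregate(global_model, client_models):
    """FedAvg over all parameters except normalization layers (FedBN-style).
    The 'bn'/'norm' name filter is a simplification keyed on module naming."""
    state = global_model.state_dict()
    for key in state:
        if "bn" in key or "norm" in key:       # keep normalization stats local
            continue
        state[key] = torch.stack([c.state_dict()[key].float() for c in client_models]).mean(0)
    global_model.load_state_dict(state)
    return global_model

# Tiny usage example with two clients on random data.
global_model = TinyNet()
clients = [copy.deepcopy(global_model) for _ in range(2)]
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,)))]
clients = [local_update(c, global_model, loader) for c in clients]
global_model = aggregate(global_model, clients)
```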
In long-range multi-robot autonomous exploration tasks (e.g., search and response), semantic object mapping in uncertain, perceptually degraded environments is important and challenging. During such tasks, high recall is required to avoid missing true target objects, while high precision is also crucial to avoid wasting valuable operational time on false positives. Given recent advances in visual perception algorithms, the former can largely be addressed autonomously, but the latter is hard to solve without the supervision of a human operator. However, operational constraints such as mission time, computational requirements, and network bandwidth can make the operator's task infeasible unless properly managed. We propose the Early Recall, Late Precision (EaRLaP) semantic object mapping pipeline to address this problem. EaRLaP was used by Team CoSTAR in the DARPA Subterranean Challenge, where it successfully detected all the artifacts encountered by the robot team. We discuss these results and the performance of EaRLaP on various datasets.
In image classification, "debiasing" aims to train a classifier to be less susceptible to dataset bias, the strong correlation between peripheral attributes of data samples and target classes. For example, even if the frog class in a dataset mainly consists of frog images with a swamp background (i.e., bias-aligned samples), the classifier should correctly classify a frog on a beach (i.e., a bias-conflicting sample). Recent debiasing approaches commonly use two components, a biased model $f_b$ and a debiased model $f_d$. $f_b$ is trained to focus on bias-aligned samples (i.e., to overfit the bias), while $f_d$ is trained mainly on bias-conflicting samples by focusing on the samples that $f_b$ fails to learn, making $f_d$ less susceptible to dataset bias. While state-of-the-art debiasing techniques aim to better train $f_d$, we focus on training $f_b$, a component that has so far been overlooked. Our empirical analysis reveals that removing bias-conflicting samples from the training set of $f_b$ is important for improving the debiasing performance of $f_d$. This is because bias-conflicting samples interfere with the biasing of $f_b$, since such samples do not include the bias attribute. To this end, we propose a simple yet effective data sample selection method that removes bias-conflicting samples to construct a bias-amplified dataset for training $f_b$. Our data sample selection method can be directly applied to existing reweighting-based debiasing approaches, obtaining consistent performance gains and achieving state-of-the-art performance on both synthetic and real-world datasets.
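The sample-selection idea can be sketched as follows: score every sample with a preliminary classifier and keep only the low-loss (likely bias-aligned) ones as the bias-amplified training set for $f_b$; the scoring model, keep ratio, and random data below are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch of the sample-selection idea: keep only samples a preliminary model
# fits easily (likely bias-aligned) to build a bias-amplified set for f_b.
# The scoring model, keep ratio, and random data are illustrative assumptions.
import torch
import torch.nn as nn

def select_bias_amplified(model, x, y, keep_ratio=0.8):
    """Return indices of the lowest-loss samples under the preliminary model."""
    model.eval()
    with torch.no_grad():
        losses = nn.functional.cross_entropy(model(x), y, reduction="none")
    n_keep = int(keep_ratio * len(y))
    return torch.argsort(losses)[:n_keep]         # low loss ~ bias-aligned

# Toy data and a preliminary classifier trained for a few steps.
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
model = nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(20):
    opt.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    opt.step()

idx = select_bias_amplified(model, x, y)
x_amp, y_amp = x[idx], y[idx]                     # training set for the biased model f_b
print("kept", len(idx), "of", len(y), "samples for f_b")
```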